Results 1 - 20 of 41
1.
Clin Neuropsychol ; : 1-17, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-38041021

ABSTRACT

Objective: To determine whether similar levels of performance on the Overall Test Battery Mean (OTBM) occur at different forced choice test (FCT) p-value failure thresholds and, second, to determine the OTBM levels associated with above-chance failures on various performance validity tests (PVTs). Method: OTBMs were computed from archival data obtained from four practices. We calculated each examinee's Estimated Premorbid Global Ability (EPGA) and OTBM. The sample comprised 5,103 examinees, 282 (5.5%) of whom scored below chance at p ≤ .20 on at least one FCT. Results: The OTBM associated with a failure at p ≤ .20 was equivalent to the OTBM associated with failing 6 or more PVTs at above-chance cutoffs. The mean OTBMs relative to increasingly strict FCT p cutoffs were similar (T scores in the 30s). As expected, there was an inverse relationship between the number of PVTs failed and examinees' OTBMs. Conclusions: The data support the use of p ≤ .20 as the probability level for testing the significance of below-chance performance on FCTs. The OTBM can be used to index the influence of invalid performance on outcomes, especially when an examinee scores below chance.
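To make the p ≤ .20 below-chance criterion concrete, here is a minimal sketch of a one-tailed binomial test against chance responding on a two-alternative forced choice measure; the trial count and score are invented for illustration and are not drawn from the study.

```python
from scipy.stats import binom

# Hypothetical two-alternative forced choice result (illustrative only):
# 20 correct out of 50 trials, with chance performance at p = .5.
n_trials, n_correct = 50, 20

# One-tailed probability of scoring this low or lower by guessing alone.
p_one_tailed = binom.cdf(n_correct, n_trials, 0.5)
print(f"P(X <= {n_correct} | n = {n_trials}, p = .5) = {p_one_tailed:.3f}")

# Flag the score as significantly below chance at the p <= .20 level the
# article supports; at a stricter .05 level this particular score would not
# reach significance, which is why the more liberal level increases power.
print("Below chance at p <= .20:", p_one_tailed <= 0.20)
```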

2.
Clin Neuropsychol ; 36(3): 523-545, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35043752

ABSTRACT

To provide education regarding the critical importance of test security for neuropsychological and psychological tests, and to establish recommendations for best practices for maintaining test security in forensic, clinical, teaching, and research settings. Previous test security guidelines were not adequately specified. METHOD: Neuropsychologists practicing in a broad range of settings collaborated to develop detailed and specific guidance regarding test security to best ensure continued viability of neuropsychological and psychological tests. Implications of failing to maintain test security for both the practice of neuropsychology and for society at large were identified. Types of test data that can be safely disclosed to nonpsychologists are described. Specific procedures can be followed that will minimize risk of invalidating future use of neuropsychological and psychological measures. Clinical neuropsychologists must commit to protecting sensitive neuropsychological and psychological test information from exposure to nonpsychologists, and now have specific recommendations that will guide that endeavor.


Subjects
Academies and Institutes, Neuropsychology, Humans, Neuropsychological Tests, United States
3.
Clin Neuropsychol ; 36(8): 2120-2134, 2022 11.
Article in English | MEDLINE | ID: mdl-34632958

ABSTRACT

To determine if the number of participants with psychiatric disorders increased in association with failures on symptom validity tests (SVTs) and a performance validity test (PVT) in Veterans admitted for evaluation of possible seizures. The 254 participants were Veterans undergoing inpatient video-EEG monitoring for the diagnosis of possible seizures. DSM-IV psychiatric disorders were diagnosed with the SCID-IV. Symptom exaggeration was assessed with the MMPI-2-RF and performance validity with the TOMM. On the MMPI-2-RF, 27.6%-32.7% showed symptom exaggeration. Participants who exaggerated on the MMPI-2-RF were more often diagnosed with psychiatric disorders. The TOMM was failed by 15.4% of the sample. Participants who failed the TOMM were more often diagnosed with an Axis I disorder but not with a personality disorder. The MMPI-2-RF was invalid in more cases than the TOMM, but 7.9% of the sample generated a valid MMPI-2-RF and an invalid TOMM. The correlational design does not allow conclusions about cause and effect. The invalid groups may have had a higher rate of psychopathology. The number of participants with psychiatric disorders increased in association with symptom exaggeration and performance invalidity. Symptom exaggeration was more frequent than performance invalidity, but the TOMM made a unique contribution to identification of invalidity. The routine clinical use of SVTs and PVTs is supported. The results also suggest the need for caution in diagnosing psychiatric disorders when there is symptom exaggeration or performance invalidity, because diagnostic validity is dependent on the accuracy of symptom reporting.


Subjects
Mental Disorders, Veterans, Humans, Veterans/psychology, Malingering/diagnosis, MMPI, Neuropsychological Tests, Symptom Flare Up, Reproducibility of Results, Mental Disorders/diagnosis, Seizures, Electroencephalography
4.
Clin Neuropsychol ; 35(6): 1053-1106, 2021 08.
Article in English | MEDLINE | ID: mdl-33823750

ABSTRACT

Objective: Citation and download data pertaining to the 2009 AACN consensus statement on validity assessment indicated that the topic maintained high interest in subsequent years, during which key terminology evolved and relevant empirical research proliferated. With a general goal of providing current guidance to the clinical neuropsychology community regarding this important topic, the specific update goals were to identify current key definitions of terms relevant to validity assessment; learn what experts believe should be reaffirmed from the original consensus paper, as well as new consensus points; and incorporate the latest recommendations regarding the use of validity testing, as well as current application of the term 'malingering.' Methods: In the spring of 2019, four of the original 2009 work group chairs and additional experts for each work group were impaneled. A total of 20 individuals shared ideas and writing drafts until reaching consensus on January 21, 2021. Results: Consensus was reached regarding affirmation of prior salient points that continue to garner clinical and scientific support, as well as creation of new points. The resulting consensus statement addresses definitions and differential diagnosis, performance and symptom validity assessment, and research design and statistical issues. Conclusions/Importance: In order to provide bases for diagnoses and interpretations, the current consensus is that all clinical and forensic evaluations must proactively address the degree to which results of neuropsychological and psychological testing are valid. There is a strong and continually growing evidence-based literature on which practitioners can confidently base their judgments regarding the selection and interpretation of validity measures.


Subjects
Malingering, Neuropsychology, Academies and Institutes, Humans, Motivation, Neuropsychological Tests, United States
5.
Clin Neuropsychol ; 34(5): 919-936, 2020 07.
Article in English | MEDLINE | ID: mdl-31698991

ABSTRACT

Objective: Neuropsychological evaluations include hold tests like word-reading ability as estimates of premorbid intellect thought to be resilient to the effects of neurologic insult. We tested the alternative hypothesis that exposure to concussion or repetitive subclinical head impacts throughout early life may stunt acquisition of word-reading skills. Method: Data were obtained from student-athletes within the CARE Consortium that completed the Wechsler Test of Adult Reading (WTAR). Measures of head trauma burden included self-reported concussion history and cumulative years of exposure to collision sports. We evaluated the effects of head trauma, sociodemographic (race, SES), and academic (SAT/ACT scores, learning disorder) variables on WTAR standard score using linear regression. Analyses were repeated in a football-only subsample estimating age of first exposure to football as a predictor. Results: We analyzed data from 6,598 participants (72.2% white, 39.6% female, mean ± SD age = 18.8 ± 1.2 years). Head trauma variables collectively explained 0.1% of the variance in WTAR standard scores, with years of collision sport exposure weakly predicting lower WTAR standard scores (β = .026-.035, very small effect). In contrast, sociodemographic and academic variables collectively explained 20.9-22.5% of WTAR standard score variance, with strongest effects noted for SAT/ACT scores (β = .313-.337, medium effect), LD diagnosis (β = -.115 to -.131, small effect), and SES (β = .101-.108, small effect). Age of first exposure to football did not affect WTAR scores in a football-only sample. Conclusion: Wechsler Test of Adult Reading performance appears unrelated to history of self-reported concussion(s) and/or repetitive subclinical head trauma exposure in current collegiate athletes. Sociodemographic and academic variables should be incorporated in test score interpretations for diverse populations like athletes.


Subjects
Athletic Injuries/diagnosis, Brain Concussion/diagnosis, Cognition/physiology, Neuropsychological Tests/standards, Reading, Wechsler Scales/standards, Adolescent, Athletic Injuries/psychology, Brain Concussion/psychology, Female, Humans, Male
6.
Clin Neuropsychol ; 33(8): 1354-1372, 2019 11.
Article in English | MEDLINE | ID: mdl-31111775

ABSTRACT

Objective: Discrimination of patients passing vs. failing the Word Memory Test (WMT) by performance on 11 performance and symptom validity tests (PVTs, SVTs) from the Meyers Neuropsychological Battery (MNB) at per-test false positive cutoffs ranging from 0 to 15%. PVT and SVT intercorrelation in subgroups passing and failing the WMT, as well as the degree of skew of the individual PVTs and SVT in the pass/fail subgroups, were also analyzed. Method: In 255 clinical and forensic cases, 100 failed and 155 passed the WMT, a base rate of invalid performance of 39.2%. Performance was contrasted on 10 PVTs and 1 SVT from the MNB, using per-test false positive rates of 0.0%, 3.3%, 5.0%, 10.0%, and 15.0% in discriminating WMT pass and WMT fail groups. These two WMT groups were also contrasted using the 10 PVTs and 1 SVT as continuous variables in a logistic regression. Results: The per-PVT false positive rate of 10% yielded the highest WMT pass/fail classification accuracy, and more closely approximated the classification obtained by logistic regression than other cut scores. PVT and SVT correlations were higher in cases failing the WMT, and data were more highly skewed in those passing the WMT. Conclusions: The optimal per-PVT and SVT cutoff is at a false positive rate of 10%, with failure of ≥3 PVTs/SVTs out of 11 yielding sensitivity of 61.0% and specificity of 90.3%. PVTs with the best classification had the greatest degree of skew in the WMT pass subgroup.
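As a rough sketch of the kind of logistic-regression comparison described in this abstract, the code below fits a logistic model to simulated continuous validity scores; the group sizes echo the abstract, but the score distributions, scaling, and resulting AUC are hypothetical stand-ins, not the MNB data or the study's results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Simulated continuous scores on 10 PVTs + 1 SVT (higher = better performance)
# for 155 WMT-pass and 100 WMT-fail examinees; the distributions are invented.
n_pass, n_fail, n_tests = 155, 100, 11
X = np.vstack([
    rng.normal(loc=50, scale=8, size=(n_pass, n_tests)),
    rng.normal(loc=42, scale=10, size=(n_fail, n_tests)),
])
y = np.concatenate([np.zeros(n_pass), np.ones(n_fail)])  # 1 = WMT fail

# Treat the validity indicators as continuous predictors of WMT failure.
model = LogisticRegression(max_iter=1000).fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"In-sample AUC for the continuous-score model: {auc:.2f}")
```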


Subjects
Neuropsychological Tests/standards, Research Design, Adult, Female, Humans, Male, Reproducibility of Results
7.
Clin Neuropsychol ; 31(8): 1401-1405, 2017 11.
Article in English | MEDLINE | ID: mdl-28994350

ABSTRACT

We reply to Nichols' (2017) critique of our commentary on the MMPI-2/MMPI-2-RF Symptom Validity Scale (FBS/FBS-r) as a measure of symptom exaggeration versus a measure of litigation response syndrome (LRS). Nichols claims that we misrepresented the thrust of the original paper he co-authored with Gass; namely, that they did not represent that the FBS/FBS-r were measures of LRS but rather intended to convey that the FBS/FBS-r were indeterminate as to whether the scales measured LRS or measured symptom exaggeration. Our original commentary offered statistical support from published literature that (1) FBS/FBS-r were associated with performance validity test (PVT) failure, establishing the scales as measures of symptom exaggeration, and (2) persons in litigation who passed PVTs did not produce clinically significant elevations on the scales, contradicting that FBS/FBS-r were measures of LRS. In the present commentary, we draw a distinction between the psychometric data we present supporting the validity of FBS/FBS-r and the conceptual, non-statistical arguments presented by Nichols, who does not refute our original empirically based conclusions.


Subjects
MMPI, Malingering, Humans, Male, Neuropsychological Tests, Psychometrics, Reproducibility of Results
8.
Clin Neuropsychol ; 31(8): 1387-1395, 2017 11.
Article in English | MEDLINE | ID: mdl-28829224

ABSTRACT

OBJECTIVES: To address (1) whether there is empirical evidence for the contention of Nichols and Gass that the MMPI-2/MMPI-2-RF FBS/FBS-r Symptom Validity Scale is a measure of Litigation Response Syndrome (LRS), representing a credible set of responses and reactions of claimants to the experience of being in litigation, rather than a measure of non-credible symptom report, as the scale is typically used; and (2) to address their stated concerns about the validity of FBS/FBS-r meta-analytic results and the risk of false positive elevations in persons with bona fide medical conditions. METHOD: Review of published literature on the FBS/FBS-r, focusing in particular on associations between scores on this symptom validity test and scores on performance validity tests (PVTs), and on FBS/FBS-r score elevations in patients with genuine neurologic, psychiatric, and medical problems. RESULTS: (1) Several investigations show significant associations between FBS/FBS-r scores and PVTs measuring non-credible performance; (2) litigants who pass PVTs do not produce significant elevations on FBS/FBS-r; (3) non-litigating medical patients (bariatric surgery candidates, persons with sleep disorders, and patients with severe traumatic brain injury) who have multiple physical, emotional, and cognitive symptoms do not produce significant elevations on FBS/FBS-r. Two meta-analytic studies show large effect sizes for FBS/FBS-r of similar magnitude. CONCLUSIONS: FBS/FBS-r measures non-credible symptom report rather than legitimate experience of litigation stress. Importantly, the absence of significant FBS/FBS-r elevations in litigants who pass PVTs, demonstrating credible performance, directly contradicts the contention of Nichols and Gass that the scale measures LRS. These data, meta-analytic publications, and recent test use surveys support the admissibility of FBS/FBS-r under both Daubert and the older Frye criteria.


Subjects
Malingering, Sleep Wake Disorders, Humans, MMPI, Neuropsychological Tests, Reproducibility of Results
9.
Arch Clin Neuropsychol ; 31(4): 313-31, 2016 Jun.
Article in English | MEDLINE | ID: mdl-27084732

ABSTRACT

OBJECTIVE: The objective is to examine failure on three embedded performance validity tests [Reliable Digit Span (RDS), Auditory Verbal Learning Test (AVLT) logistic regression, and AVLT recognition memory] in early Alzheimer disease (AD; n = 178), amnestic mild cognitive impairment (MCI; n = 365), and cognitively intact age-matched controls (n = 206). METHOD: Neuropsychological tests scores were obtained from subjects participating in the Alzheimer's Disease Neuroimaging Initiative (ADNI). RESULTS: RDS failure using a ≤7 RDS threshold was 60/178 (34%) for early AD, 52/365 (14%) for MCI, and 17/206 (8%) for controls. A ≤6 RDS criterion reduced this rate to 24/178 (13%) for early AD, 15/365 (4%) for MCI, and 7/206 (3%) for controls. AVLT logistic regression probability of ≥.76 yielded unacceptably high false-positive rates in both clinical groups [early AD = 149/178 (79%); MCI = 159/365 (44%)] but not cognitively intact controls (13/206, 6%). AVLT recognition criterion of ≤9/15 classified 125/178 (70%) of early AD, 155/365 (42%) of MCI, and 18/206 (9%) of control scores as invalid, which decreased to 66/178 (37%) for early AD, 46/365 (13%) for MCI, and 10/206 (5%) for controls when applying a ≤5/15 criterion. Despite high false-positive rates across individual measures and thresholds, combining RDS ≤ 6 and AVLT recognition ≤9/15 classified only 9/178 (5%) of early AD and 4/365 (1%) of MCI patients as invalid performers. CONCLUSIONS: Embedded validity cutoffs derived from mixed clinical groups produce unacceptably high false-positive rates in MCI and early AD. Combining embedded PVT indicators lowers the false-positive rate.


Subjects
Alzheimer Disease/complications, Cognition Disorders/diagnosis, Cognition Disorders/etiology, Cognitive Dysfunction/complications, False Positive Reactions, Learning Disabilities/diagnosis, Learning Disabilities/etiology, Verbal Learning/physiology, Acoustic Stimulation, Aged, Aged, 80 and over, Alzheimer Disease/diagnosis, Analysis of Variance, Cognitive Dysfunction/diagnosis, Female, Humans, Male, Neuropsychological Tests, Predictive Value of Tests, Severity of Illness Index
10.
Am Psychol ; 70(8): 779-788, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26618971

ABSTRACT

This article discusses construct and criterion validity of neuropsychological tests, as well as assessment validity, which allows determination of whether an individual examinee is producing valid test results. Factor analyses identify 6 domains of abilities. Tests of learning and memory and processing speed are most sensitive to presence of brain dysfunction in both traumatic brain injury (TBI) and Alzheimer's disease (AD). Tests of processing speed, working memory, verbal symbolic functions, and visuoperceptual and visuospatial judgment and problem solving are sensitive to the severity of TBI and AD, as well as to the functional consequences of these disorders, including ability to work, financial and medical decision-making capacities, and driving ability. Unilateral hemisphere stroke allows study of impairment in sensorimotor skills and lateralized neuropsychological abilities, as well as the moderating effects of aphasia and neglect on test performance. Assessment validity is determined by performance validity tests, measuring whether an examinee is providing an accurate measure of their actual level of ability, and symptom validity tests, measuring whether an examinee is providing an accurate report of their actual symptom experience. A core neuropsychological battery is described that includes tests with established construct and criterion validity, and assessment validity, for comprehensive evidence-based evaluation.


Subjects
Cognition Disorders/diagnosis, Neuropsychological Tests, Cognition/physiology, Humans, Memory/physiology, Problem Solving/physiology
12.
Clin Neuropsychol ; 28(8): 1230-42, 2014.
Article in English | MEDLINE | ID: mdl-25491180

ABSTRACT

Bilder, Sugar, and Hellemann (2014, this issue) contend that empirical support is lacking for use of multiple performance validity tests (PVTs) in evaluation of the individual case, differing from the conclusions of Davis and Millis (2014) and Larrabee (2014), who found no substantial increase in false positive rates using a criterion of failure of ≥2 PVTs and/or symptom validity tests (SVTs) out of multiple tests administered. Reconsideration of data presented in Larrabee (2014) supports a criterion of ≥2 out of up to 7 PVTs/SVTs, which keeps false positive rates close to, and in most cases below, 10% in cases with bona fide neurologic, psychiatric, and developmental disorders. Strategies to minimize risk of false positive error are discussed, including (1) adjusting individual PVT cutoffs, or the criterion for number of PVTs failed, for examinees who have clinical histories placing them at risk for false positive identification (e.g., severe TBI, schizophrenia), (2) using the history of the individual case to rule out conditions known to result in false positive errors, (3) using normal performance in domains mimicked by PVTs to show that sufficient native ability exists for valid performance on the PVT(s) that have been failed, and (4) recognizing that as the number of PVTs/SVTs failed increases, the likelihood of a valid clinical presentation decreases, with a corresponding increase in the likelihood of invalid test performance and symptom report.
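For context on why the number of validity indicators administered matters, the short calculation below gives the aggregate false positive rate that would follow if seven indicators were statistically independent, each with a 10% per-test false positive rate; this independence assumption is an illustration, not the article's model.

```python
from scipy.stats import binom

n_tests = 7          # validity indicators administered
per_test_fp = 0.10   # per-test false positive rate

# Probability that a genuinely valid examinee fails >= 2 of the 7 indicators,
# under the (unrealistic) assumption that the indicators are independent.
p_two_or_more = 1 - binom.cdf(1, n_tests, per_test_fp)
print(f"P(>= 2 failures out of {n_tests} | independence) = {p_two_or_more:.3f}")

# The clinical samples described in the abstract show lower rates (close to
# or below 10%), which the authors attribute to the non-normal, skewed
# distributions of real PVT/SVT scores.
```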


Subjects
Brain Injuries/diagnosis, Brain Injuries/psychology, Malingering/diagnosis, Neuropsychological Tests/statistics & numerical data, Female, Humans, Male
13.
Clin Neuropsychol ; 28(8): 1366-75, 2014.
Article in English | MEDLINE | ID: mdl-25386898

ABSTRACT

A score that is significantly below the level of chance on a forced choice (FC) performance validity test results from the deliberate production of wrong answers. In order to increase the power of significance testing of a below chance result on standardized FC tests with empirically derived cutoff scores, we recommend using one-tailed tests of significance and selecting probability levels greater than .05 (.20 for most standardized FC tests with empirically derived cutoff scores). Under certain circumstances, we also recommend combining scores from different sections of the same FC test and combining scores across different FC tests. These recommendations require modifications when applied to non-standardized FC tests that lack empirically derived cutoff scores or to FC tests with a non-random topographical distribution of correct and incorrect answers.
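A minimal sketch of the pooling recommendation, assuming two sections of the same two-alternative test with a .5 chance rate and independently answered items; the counts are invented for illustration.

```python
from scipy.stats import binom

# Hypothetical (correct, trials) counts from two sections of one FC test.
sections = [(21, 50), (22, 50)]

for correct, trials in sections:
    p_section = binom.cdf(correct, trials, 0.5)
    print(f"Section {correct}/{trials}: one-tailed p = {p_section:.3f}")

total_correct = sum(correct for correct, _ in sections)
total_trials = sum(trials for _, trials in sections)

# One-tailed binomial probability of the pooled score under chance = .5.
p_pooled = binom.cdf(total_correct, total_trials, 0.5)
print(f"Pooled score {total_correct}/{total_trials}: one-tailed p = {p_pooled:.3f}")
# Pooling raises the trial count, increasing power to detect deliberately
# below-chance responding relative to testing each section alone.
```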


Subjects
Intention, Malingering/diagnosis, Neuropsychological Tests, Choice Behavior, Humans, Neuropsychological Tests/standards, Probability, Reproducibility of Results
14.
Arch Clin Neuropsychol ; 29(7): 695-714, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25280794

ABSTRACT

Literature on test validity and performance validity is reviewed to propose a framework for specification of an ability-focused battery (AFB). Factor analysis supports six domains of ability: (1) verbal symbolic; (2) visuoperceptual and visuospatial judgment and problem solving; (3) sensorimotor skills; (4) attention/working memory; (5) processing speed; and (6) learning and memory (which can be divided into verbal and visual subdomains). The AFB should include at least three measures for each of the six domains, selected based on various criteria for validity, including sensitivity to presence of disorder, sensitivity to severity of disorder, correlation with important activities of daily living, and containing embedded/derived measures of performance validity. Criterion groups should include moderate and severe traumatic brain injury, and Alzheimer's disease. Validation groups should also include patients with left and right hemisphere stroke, to determine measures sensitive to lateralized cognitive impairment and so that the moderating effects of auditory comprehension impairment and neglect can be analyzed on AFB measures.


Subjects
Aptitude/physiology, Brain Diseases/diagnosis, Neuropsychological Tests/standards, Psychomotor Performance/physiology, Reproducibility of Results, Humans
15.
Arch Clin Neuropsychol ; 29(4): 364-73, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24769887

ABSTRACT

Performance validity test (PVT) error rates using Monte Carlo simulation reported by Berthelson and colleagues (in False positive diagnosis of malingering due to the use of multiple effort tests. Brain Injury, 27, 909-916, 2013) were compared with PVT and symptom validity test (SVT) failure rates in two nonmalingering clinical samples. At a per-test false-positive rate of 10%, Monte Carlo simulation overestimated error rates for: (i) failure of ≥2 out of 5 PVTs/SVT for Larrabee (in Detection of malingering using atypical performance patterns on standard neuropsychological tests. The Clinical Neuropsychologist, 17, 410-425, 2003) and ACS (Pearson, Advanced clinical solutions for use with WAIS-IV and WMS-IV. San Antonio: Pearson Education, 2009) and (ii) failure of ≥2 out of 7 PVTs/SVT for Larrabee (Detection of malingering using atypical performance patterns on standard neuropsychological tests. The Clinical Neuropsychologist, 17, 410-425, 2003; Malingering scales for the Continuous Recognition Memory Test and Continuous Visual Memory Test. The Clinical Neuropsychologist, 23, 167-180, 2009 combined). Monte Carlo overestimation is likely because PVT performances are atypical in pattern or degree for what occurs in actual neurologic, psychiatric, or developmental disorders. Consequently, PVT scores form skewed distributions with performance at ceiling and restricted range, rather than forming a standard normal distribution with mean of 0 and standard deviation of 1.0. These results support the practice of using ≥2 PVT/SVT failures as representing probable invalid clinical presentation.
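The distributional issue raised here can be illustrated with a small simulation; the multivariate-normal model and the correlation values below are arbitrary demonstration assumptions (skew and restricted range, the factors emphasized in the abstract, are not modeled), not a replication of Berthelson and colleagues' parameters.

```python
import numpy as np

rng = np.random.default_rng(42)
n_examinees, n_tests, per_test_fp = 100_000, 7, 0.10

def multi_test_failure_rate(r):
    """Proportion of simulated examinees failing >= 2 of n_tests when each
    test has a 10% false positive rate and scores are multivariate normal
    with a common intertest correlation r."""
    cov = np.full((n_tests, n_tests), r)
    np.fill_diagonal(cov, 1.0)
    scores = rng.multivariate_normal(np.zeros(n_tests), cov, size=n_examinees)
    cutoffs = np.quantile(scores, per_test_fp, axis=0)  # 10th percentile per test
    return ((scores < cutoffs).sum(axis=1) >= 2).mean()

for r in (0.0, 0.3, 0.6):
    print(f"r = {r:.1f}: P(>= 2 of {n_tests} failures) = {multi_test_failure_rate(r):.3f}")

# The abstract's counterpoint: real clinical PVT scores are skewed toward
# ceiling with restricted range, unlike these normal draws, which is why
# observed multi-test failure rates in valid patients fall below estimates
# built on distributional assumptions of this kind.
```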


Subjects
Brain Injuries/diagnosis, Brain Injuries/psychology, Malingering/diagnosis, Computer Simulation, Disability Evaluation, False Positive Reactions, Female, Glasgow Coma Scale, Humans, Male, Malingering/etiology, Memory Disorders, Monte Carlo Method, Neuropsychological Tests, Sensitivity and Specificity
16.
Arch Clin Neuropsychol ; 29(1): 7-15, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24191968

ABSTRACT

We examined the effect of simulated head injury on scores on the Neurological Complaints (NUC) and Cognitive Complaints (COG) scales of the Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2-RF). Young adults with a history of mild head injury were randomly assigned to simulate head injury or give their best effort on a battery of neuropsychological tests, including the MMPI-2-RF. Simulators who also showed poor effort on performance validity tests (PVTs) were compared with controls who showed valid performance on PVTs. Results showed that both scales, but especially NUC, were elevated in individuals simulating head injury, with medium to large effect sizes. Although both scales were highly correlated with all MMPI-2-RF over-reporting validity scales, the relationship of the Response Bias Scale to both NUC and COG was much stronger in the simulators than in the controls. Even accounting for over-reporting on the MMPI-2-RF, NUC was related to general somatic complaints regardless of group membership, whereas COG was related to both psychological distress and somatic complaints in the control group only. Neither scale was related to actual neuropsychological performance, regardless of group membership. Overall, results provide further evidence that self-reported cognitive symptoms can be due to many causes, not necessarily cognitive impairment, and can be exaggerated in a non-credible manner.


Subjects
Brain Injuries/complications, Brain Injuries/psychology, Cognition Disorders/etiology, Nervous System Diseases/etiology, Patient Simulation, Personality/physiology, Adolescent, Disability Evaluation, Female, Humans, MMPI, Male, Malingering/diagnosis, Reproducibility of Results, Statistics as Topic, Young Adult
17.
Behav Sci Law ; 31(6): 686-701, 2013.
Article in English | MEDLINE | ID: mdl-24105915

ABSTRACT

The diagnosis and evaluation of mild traumatic brain injury (mTBI) is reviewed from the perspective of meta-analyses of neuropsychological outcome, showing full recovery from a single, uncomplicated mTBI by 90 days post-trauma. Persons with a history of complicated mTBI, characterized by day-of-injury computed tomography or magnetic resonance imaging abnormalities, and those who have suffered prior mTBIs may or may not show evidence of complete recovery similar to that experienced by persons suffering a single, uncomplicated mTBI. Persistent post-concussion syndrome (PCS) is considered a somatoform presentation, influenced by the non-specificity of PCS symptoms, which commonly occur in non-TBI samples and co-vary as a function of general life stress and psychological factors, including symptom expectation, depression, and anxiety. A model is presented for forensic evaluation of the individual mTBI case, which involves open-ended interview, followed by structured interview, record review, and detailed neuropsychological testing. Differential diagnosis includes consideration of other neurologic and psychiatric disorders, symptom expectation, diagnosis threat, developmental disorders, and malingering.


Subjects
Brain Injuries/diagnosis, Neuropsychology, Brain Injuries/physiopathology, Brain Injuries/rehabilitation, Diagnosis, Differential, Forensic Medicine, Humans, Meta-Analysis as Topic, Treatment Outcome
18.
Clin Neuropsychol ; 27(2): 215-37, 2013.
Article in English | MEDLINE | ID: mdl-23414416

ABSTRACT

Bigler et al. (2013, The Clinical Neuropsychologist) contend that weak methodology and poor quality of the studies comprising our recent meta-analysis led us to miss detecting a subgroup of mild traumatic brain injury (mTBI) characterized by persisting symptomatic complaint and positive biomarkers for neurological damage. Our computation of non-significant Q, τ², and I² statistics contradicts the existence of a subgroup of mTBI with poor outcome, or variation in effect size as a function of quality of research design. Consistent with this conclusion, the largest single contributor to our meta-analysis, Dikmen, Machamer, Winn, and Temkin (1995, Neuropsychology, 9, 80), yielded an effect size (-0.02) that was smaller in magnitude than our overall effect size of -0.07, despite using the most liberal definition of mTBI: loss of consciousness of less than 1 hour, with no exclusion of subjects who had positive CT scans. The evidence is weak for biomarkers of mTBI, such as diffusion tensor imaging, and for demonstrable neuropathology in uncomplicated mTBI. Postconcussive symptoms and reduced neuropsychological test scores are not specific to mTBI but can result from pre-existing psychosocial and psychiatric problems, expectancy effects, and diagnosis threat. Moreover, neuropsychological impairment is seen in a variety of primary psychiatric disorders, which themselves are predictive of persistent complaints following mTBI. We urge use of prospective studies with orthopedic trauma controls in future investigations of mTBI to control for these confounding factors.


Subjects
Brain Injuries/complications, Cognition Disorders/diagnosis, Cognition Disorders/etiology, Memory Disorders/diagnosis, Neuropsychological Tests, Female, Humans, Male
19.
J Int Neuropsychol Soc ; 18(4): 625-30, 2012 Jul.
Article in English | MEDLINE | ID: mdl-23057079

ABSTRACT

Failure to evaluate the validity of an examinee's neuropsychological test performance can alter prediction of external criteria in research investigations, and in the individual case, result in inaccurate conclusions about the degree of impairment resulting from neurological disease or injury. The terms performance validity referring to validity of test performance (PVT), and symptom validity referring to validity of symptom report (SVT), are suggested to replace less descriptive terms such as effort or response bias. Research is reviewed demonstrating strong diagnostic discrimination for PVTs and SVTs, with a particular emphasis on minimizing false positive errors, facilitated by identifying performance patterns or levels of performance that are atypical for bona fide neurologic disorder. It is further shown that false positive errors decrease, with a corresponding increase in the positive probability of malingering, when multiple independent indicators are required for diagnosis. The rigor of PVT and SVT research design is related to a high degree of reproducibility of results, and large effect sizes of d = 1.0 or greater, exceeding effect sizes reported for several psychological and medical diagnostic procedures.


Subjects
Neuropsychological Tests/standards, Data Interpretation, Statistical, Disability Evaluation, False Positive Reactions, Humans, Likelihood Functions, Malingering/diagnosis, Malingering/psychology, Memory Disorders/diagnosis, Memory Disorders/psychology, Reproducibility of Results
20.
J Int Neuropsychol Soc ; 18(4): 630-1, 2012 Jul.
Article in English | MEDLINE | ID: mdl-23057083